Results 1 - 3 of 3
1.
Heliyon ; 10(7): e29050, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38623206

ABSTRACT

Background: Anesthesiology plays a crucial role in perioperative care, critical care, and pain management, impacting patient experiences and clinical outcomes. However, our understanding of the anesthesiology research landscape is limited. Accordingly, we initiated a data-driven analysis using topic modeling to uncover research trends, enabling informed decision-making and fostering progress within the field.

Methods: The easyPubMed R package was used to collect 32,300 PubMed abstracts spanning 2000 to 2022. These abstracts were authored by 737 Anesthesiology Principal Investigators (PIs) who were recipients of National Institutes of Health (NIH) funding from 2010 to 2022. Abstracts were preprocessed, vectorized, and analyzed with the state-of-the-art BERTopic algorithm to identify pillar topics and trending subtopics within anesthesiology research. Temporal trends were assessed using the Mann-Kendall test.

Results: The journals with the most abstracts in this dataset were Anesthesia & Analgesia (1,133), Anesthesiology (992), and Pain (671). Eight pillar topics were identified and categorized as basic or clinical sciences based on a hierarchical clustering analysis. Among the pillar topics, "Cells & Proteomics" had both the highest annual and total number of abstracts. Interestingly, there was an overall upward trend for all topics across 2000-2022. However, when focusing on the period from 2015 to 2022, the topics "Cells & Proteomics" and "Pulmonology" exhibited a downward trajectory. Additionally, various subtopics were identified, with notable increasing trends in "Aneurysms", "Covid 19 Pandemic", and "Artificial intelligence & Machine Learning".

Conclusion: Our work offers a comprehensive analysis of the anesthesiology research landscape by providing insights into pillar topics and trending subtopics. These findings contribute to a better understanding of anesthesiology research and can guide future directions.
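The Mann-Kendall test the authors use for trend assessment can be sketched in a few lines of pure Python. This is a minimal version without tie correction, not the authors' implementation, and the annual abstract counts below are hypothetical stand-ins for a declining topic series:

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test (no tie correction).
    Returns (S, Z, p): test statistic, normal approximation, two-sided p-value."""
    n = len(series)
    # S counts concordant minus discordant pairs over all i < j
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    # Continuity-corrected normal approximation for Z
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    # Two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return s, z, p

# Hypothetical annual abstract counts, 2015-2022, drifting downward
counts = [410, 395, 400, 370, 360, 340, 330, 315]
s, z, p = mann_kendall(counts)
print(s, round(z, 2), round(p, 4))  # S = -26, Z ≈ -3.09, p < 0.01
```

A significantly negative Z, as here, is what would flag a topic such as "Pulmonology" as trending downward over 2015-2022.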

2.
Sci Rep ; 13(1): 1638, 2023 Jan 30.
Article in English | MEDLINE | ID: mdl-36717641

ABSTRACT

The black-box nature of deep neural networks (DNNs) has brought attention to the issues of transparency and fairness. Deep Reinforcement Learning (Deep RL or DRL), which uses DNNs to learn its policy, value functions, etc., is thus subject to similar concerns. This paper proposes a way to circumvent these issues through the bottom-up design of neural networks with detailed interpretability, where each neuron or layer has its own meaning and utility corresponding to a humanly understandable concept. The framework introduced in this paper, called Self Reward Design (SRD), is inspired by Inverse Reward Design, and this interpretable design can (1) solve the problem by pure design (although imperfectly) and (2) be optimized like a standard DNN. With deliberate human designs, we show that some RL problems, such as lavaland and MuJoCo, can be solved using a model constructed from standard NN components with few parameters. Furthermore, with our fish sale auction example, we demonstrate how SRD can address situations where black-box models would not make sense and humanly understandable, semantics-based decisions are required.
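The bottom-up, per-neuron design the abstract describes can be illustrated with a toy sketch. This is an illustrative construction, not the paper's SRD model: each "neuron" below is a hand-designed feature with a human-readable meaning, and the policy scores actions by combining them with fixed, human-chosen weights, so the decision process is interpretable by construction:

```python
# Toy 1-D gridworld: cells 0..6, goal at cell 6, lava at cell 3.
LAVA, GOAL = {3}, 6

def lava_neuron(pos, move):
    """Fires 1.0 exactly when the move would step into lava."""
    return 1.0 if pos + move in LAVA else 0.0

def progress_neuron(pos, move):
    """Fires 1.0 when the move reduces distance to the goal."""
    return 1.0 if abs(pos + move - GOAL) < abs(pos - GOAL) else 0.0

def score(pos, move):
    # Fixed human-chosen weights: avoiding lava (-10) dominates progress (+1),
    # so every output can be read off directly from the two named features.
    return -10.0 * lava_neuron(pos, move) + 1.0 * progress_neuron(pos, move)

def policy(pos):
    """Pick the in-bounds move (-1 or +1) with the highest score."""
    moves = [m for m in (-1, 1) if 0 <= pos + m <= GOAL]
    return max(moves, key=lambda m: score(pos, m))

print(policy(2))  # -1: stepping right would enter lava, so the agent retreats
print(policy(4))  # +1: past the lava, moving right progresses toward the goal
```

Because each weight and feature carries an explicit meaning, a wrong decision can be traced to a specific neuron, which is the property a black-box DNN lacks.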

3.
IEEE Trans Neural Netw Learn Syst ; 32(11): 4793-4813, 2021 11.
Article in English | MEDLINE | ID: mdl-33079674

ABSTRACT

Recently, artificial intelligence and machine learning in general have demonstrated remarkable performance in many tasks, from image processing to natural language processing, especially with the advent of deep learning (DL). Along with research progress, they have encroached upon many different fields and disciplines. Some of these, for example the medical sector, require a high level of accountability and thus transparency. Explanations for machine decisions and predictions are therefore needed to justify their reliability. This requires greater interpretability, which often means we need to understand the mechanism underlying the algorithms. Unfortunately, the black-box nature of DL remains unresolved, and many machine decisions are still poorly understood. We provide a review of the interpretability methods suggested by different research works and categorize them. The different categories show different dimensions of interpretability research, from approaches that provide "obviously" interpretable information to studies of complex patterns. By applying the same categorization to interpretability in medical research, it is hoped that: 1) clinicians and practitioners can subsequently approach these methods with caution; 2) insight into interpretability will emerge with more consideration for medical practices; and 3) initiatives to push forward data-based, mathematically grounded, and technically grounded medical education are encouraged.
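One of the simplest families of methods such reviews cover, perturbation-based attribution, can be sketched without any deep learning framework: occlude each input feature in turn and measure how much the model's output changes. The model and inputs here are hypothetical stand-ins, not from the paper:

```python
def occlusion_importance(model, x, baseline=0.0):
    """Perturbation-based attribution: replace each feature with a
    baseline value and record the change in the model's output."""
    base_out = model(x)
    importances = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline          # occlude feature i
        importances.append(base_out - model(perturbed))
    return importances

# Hypothetical linear "model": feature 1 matters most, feature 2 not at all
def model(x):
    return 2.0 * x[0] + 5.0 * x[1] + 0.0 * x[2]

print(occlusion_importance(model, [1.0, 1.0, 1.0]))  # [2.0, 5.0, 0.0]
```

This kind of "obviously" interpretable output (one importance score per input feature) sits at one end of the spectrum the review describes; methods probing complex internal patterns sit at the other.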


Subject(s)
Machine Learning/trends; Neural Networks, Computer; Pattern Recognition, Automated/trends; Surveys and Questionnaires; Artificial Intelligence/trends; Humans; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/trends; Pattern Recognition, Automated/methods